SURROUND SOUND FOR THE DAW OWNER 

by 
MICHAEL JOHN BAYLEY 
B.A., Pomona College, 2008 



A thesis submitted to the 

University of Colorado Denver 

in partial fulfillment 

of the requirements for the degree of 

Master of Science 

Recording Arts 

2012 



This thesis for the Master of Science 

degree by 

Michael John Bayley 

has been approved for the 

Master of Science in Recording Arts Program 

by 

Lorne Bregitzer, Chair
Leslie Gaston 
Ray Rayburn 



October 23rd, 2012






Bayley, Michael John (B.A., Sound Technology) 

Surround Sound for the DAW Owner 

Thesis directed by Assistant Professor Lorne Bregitzer

ABSTRACT 

The purpose of this thesis portfolio project is to broaden the population of 
surround sound listeners and producers by presenting an eight-song album of multi- 
channel audio in the form of Pro Tools session files, so that people who own a Digital 
Audio Workstation (DAW) and an audio interface with more than two outputs are 
encouraged to set up their own three, four, or five-channel surround sound environment 
with equipment they already have at home. This is both an experiment in audio 
advocacy, and a showcase of my best production work. The idea is that while 5.1- 
channel surround sound is already in place in home theater and elsewhere, there still 
exists a growing and untapped population of people who have a basic home studio setup 
that they have not yet realized allows for playback and creation of multi-channel audio. 
All that these end users must do to listen to my project (and begin mixing in surround 
sound themselves) is plug in anywhere from one to three additional speakers to their 
audio interface's additional outputs, put my data DVD in their disc drive, and open the 
appropriate session files for the number of speakers they have. There they will find 
multi-channel stems of all the different instruments and vocals from my recordings, pre- 
routed and mixed for their surround sound enjoyment. If they would like to make 
changes, such as moving the location of an element in the mix, they can do so freely. 
Now, the artist's foot is in the door, and the hope is that they will be inspired to create 






and share surround mixes of their own using their new home setup. The written portion 
of this thesis presents further explanation and justification for the project idea, as well as 
thorough documentation of how the end product was created. 



The form and content of this abstract are approved. I recommend its publication. 

Approved: Lorne Bregitzer






DEDICATION 

I dedicate this thesis to Leslie Gaston, whose Surround Sound course got me 
excited enough about surround sound to come up with a way to make my own surround 
setup at home, even when the cost of an "official" setup was out of reach. That 
inspiration is what led to this thesis project. 



ACKNOWLEDGEMENT

Many thanks to my advisor, Lorne Bregitzer, for always being supportive of my 
work and my ideas throughout my time at the University of Colorado Denver. While the 
criticism I've received throughout the program has certainly helped me improve as an 
engineer, I really needed the positive encouragement I got from Lorne to have enough 
confidence to continue creating. Cheers my man.






TABLE OF CONTENTS

CHAPTER

I. INTRODUCTION

Purpose of the Project

Scope of the Project

Speaker Placement

Calibration

Limitations

II. PROJECT DOCUMENTATION

Artist/Group Information

Tracking

Instrumental Tracking (Tracks 1-4)

Instrumental Tracking (Tracks 5-8)

Vocal Tracking (Tracks 1-8)

Keyboard Tracking (Tracks 1-8)

Other Tracking Notes

Editing

Stereo Mixing

Phasing

Panning

Equalization

Dynamics

Delays and Reverb

Levels

Mastering

Surround Mixing

III. CONCLUSION

BIBLIOGRAPHY



LIST OF FIGURES 

Figure 

1.1 Three-Speaker Arrangement

1.2 Four-Speaker Arrangement

1.3 Five-Speaker Arrangement 

2.1 Kick Drum 

2.2 High/Rack Tom 

2.3 Low/Floor Tom 

2.4 Snare Drum 

2.5 Drums, Overheads 

2.6 Drums, Room Mikes 

2.7 Guitar 1, All Mikes 

2.8 Bass, Close Mikes 

2.9 Bass, Room Mikes 

2.10 Drums, All Mikes 

2.11 Guitar 2, All Mikes 

2.12 Bass, Live Tracking 

2.13 Vocals, Close Mike 

2.14 Vocals, Room Mike 






CHAPTER I 

INTRODUCTION 

Purpose of the Project 

If a great surround sound recording is made in the studio, but no one else ever 
hears it, does it make a sound? Much has been written about the problematic history of 
surround sound that has prevented it from gaining more widespread popularity in the 
consumer market (Holman, 1; Sessions, 1; Rumsey, x; Glasgal, Yates 93). Competing 
formats, high start-up costs, and difficulties with placing the additional speakers amount 
to a series of obstacles that many consumers are not willing to overcome. For the 
purposes of this thesis, however, I would like to set the discussion of the average 
consumer aside, and instead focus on a unique population of potential surround sound 
listeners: Digital Audio Workstation (DAW) owners. 

The number of people with home recording studios has been growing rapidly over 
the last five years (Denova, 8). Included in this population are the literally hundreds of 
thousands of students currently enrolled in Recording Arts programs, as well as those 
who have already graduated, who purchased recording equipment as part of their 
education (Education-Portal.com). Unlike the average consumer, whose focus may be 
directed more toward surround sound for home theater than for music alone, this 
population of studio owners has already shown their interest in music by the fact that they 
have invested in music recording equipment for their home. These days, even the most 
basic studios must have some form of audio interface in order to get the audio from the 
computer to the speakers (Collins, 19; Harris, 43). A survey of the current market shows 
that a large percentage of these interfaces offer more than two outputs (Sweetwater.com). 



Thus, we have a large, growing, untapped population of potential surround sound 
listeners who need only add another speaker or two to their setup to begin enjoying 
surround sound music. Lastly, since it is common, recommended practice to have two 
sets of studio monitors to compare mixes on, many of these studio owners will already 
have the additional speakers in their possession (Owsinski, 75). 

The basic setup is simple. A person with a computer, Digital Audio Workstation 
(such as Pro Tools), and an audio interface can simply plug additional speakers into the 
additional audio outputs on their interface, place the surround speakers how they please, 
and route various channels of audio to the different speakers in their DAW. Now the 
drummer can be behind you, rich guitar layers can wrap around you, and voices can 
echo from all directions. Fading the sound level down in one speaker and up in another 
creates an exciting dynamic pan across the room. All of a sudden, your audio experience 
has become three-dimensional. 
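The routing idea can be sketched outside of any DAW. The short Python fragment below is an illustration of the concept only, not of Pro Tools' internals; the four-output count and interleaved buffer layout are my own assumptions for the sketch. It places a mono signal on a single output channel while the others stay silent, which is what sending a track to one interface output amounts to:

```python
import math

SAMPLE_RATE = 44100
N_CHANNELS = 4  # outputs 1-4: front left, front right, rear left, rear right

def route_mono(mono, channel, n_channels=N_CHANNELS):
    """Place a mono signal on one channel of an interleaved multichannel buffer.

    Every other channel stays at 0.0, so only the chosen speaker plays.
    """
    frames = [0.0] * (len(mono) * n_channels)
    for i, sample in enumerate(mono):
        frames[i * n_channels + channel] = sample
    return frames

# Put a short 440 Hz tone in the rear-left speaker (index 2, i.e. output 3).
tone = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
        for n in range(SAMPLE_RATE // 100)]
frames = route_mono(tone, channel=2)
```

Fading `frames[i * N_CHANNELS + 2]` down while fading another channel's slot up is exactly the dynamic pan described above.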

Like many good ideas, this one is simple, but not obvious. Chances are, most 
studio owners do not realize they can do this with just a native Pro Tools setup. The 
entry cost for an "official" consumer-level surround sound setup is priced at over $2000 
in software alone (Avid). Although running an actual surround session in Pro Tools does 
afford the engineer some additional benefits in the way of convenience, such as a 
dedicated dynamic panner, surround sessions are also very taxing on the computer's resources, and 
some consumer systems would not be able to handle a full session. For those who 
already have surround sound up and running, cheers. For those who do not, this project 
is for you. 



Scope of the Project 

My project is an eight-track album of rock music, presented in three, four, and 
five-channel surround sound, in the form of Pro Tools 9 sessions on DVD data disc. 
Stereo files are included as well. If you own Pro Tools, simply put the data disc in your 
computer's disc drive, open a song folder, and select the session representing the number 
of speakers you have. Within the session, you will find multi-channel stems of the 
different instruments and vocals on different tracks, allowing you to solo or mute 
instruments and make changes as you please. 

Speaker Placement 

I have produced the tracks with the following speaker arrangements in mind. 
Channel numbers are in bold, and ideally, all speakers should be equidistant from the 
listening position. This can be achieved by using a string or cable to measure the 
distance from the listening position to each speaker. 
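The same check can be done with arithmetic instead of string, if rough room coordinates are known. A minimal sketch, with hypothetical coordinates in metres:

```python
import math

LISTENER = (0.0, 0.0)
SPEAKERS = {                      # hypothetical room coordinates, in metres
    "front left": (-1.2, 1.6),
    "front right": (1.2, 1.6),
    "rear center": (0.0, -2.0),
}

# Distance from the listening position to each speaker.
distances = {name: math.dist(LISTENER, pos) for name, pos in SPEAKERS.items()}

# If the spread is more than a few centimetres, nudge the outlier speaker.
spread = max(distances.values()) - min(distances.values())
```

Here all three distances come out to 2.0 m, so no speaker needs to move.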

Three Speakers 

1 (front left) 2 (front right) 

A (listening position) 

3 (rear center) 
Figure 1.1: Three-Speaker Arrangement 



Four Speakers 

1 (front left) 2 (front right) 

A (listening position) 

3 (rear left) 4 (rear right) 

Figure 1.2: Four-Speaker Arrangement 

Five Speakers 

1 (front left) 3 (front center) 2 (front right) 

A (listening position) 

5 (rear left) 6 (rear right) 

Figure 1.3: Five-Speaker Arrangement 

My decision to use these arrangements and channel orders is based on the 
following logic. With three speakers, one way to arrange the speakers would be to have a 
left, a right, and a center channel, in what is called "3-0 stereo" (Rumsey, 83). However, 
this is not really a surround setup, and if the user has only three speakers, I feel that their 
experience will be much more exciting if material can be coming from the back. In the 
rare case that the user's audio interface only has three outputs, this channel order ensures 
that the session is already routed correctly. With four speakers, another option would be 
to arrange them with a left, a right, a center, and one surround, in what is called "3-1" 
stereo, or "LCRS Surround" (Rumsey, 84). Although this is used more commonly than 
the "Quadraphonic" arrangement I chose for my project, I simply find the square speaker 
arrangement more interesting for music. Since the intention of this project was never to 



facilitate the most "ideal" or commonly-used surround setup, I chose the arrangement 
that allowed for more unique and interesting panning possibilities, rather than the most 
stable frontal stereo image. As for the channel order, I used the same reasoning that 
channels 1-4 would need to be the ones used in case the end user's audio interface only 
has four outputs. With five speakers, though, I wanted the channel order to correspond to 
the ITU standard for 5.1 surround sound (Holman, 122). You will see that channel four 
is skipped in this arrangement, due to the fact that I did not employ the LFE channel, as 
discussed in the section titled "Limitations" below. 

Calibration 

"Calibration is the process of fine-tuning a system for the particular parameters of 
the situation" (McCarthy, 429). In our case here, every situation will be different, as users 
are encouraged to scrape together whatever speakers they have available to make a 
surround setup. Since the focus of this project is simply getting new users started with 
surround sound, with the fewest obstacles possible, calibration should be a quick, simple 
process, aimed at getting different speaker levels "in the ballpark," so that the mixes 
come across more or less as intended. Below are some rough guidelines for how to do 
this. 

To calibrate the speaker levels, one can use a decibel meter (including the ones 
now available in smartphone app stores) or, worst case, one's ears. I used an iPhone app 
called "Decibel Meter Pro 2," sold as part of a package called "Audio Tool." Included 
with each disc's worth of tracks is a folder labeled "calibration," containing sessions for 
three, four, and five-channel calibration. Each session consists of tracks with the Pro Tools 



Digirack noise generator plug-in set to output pink noise. After choosing the session 
corresponding to the number of channels you plan to use, un-mute channel 1. You 
should hear pink noise coming out of your front left speaker. Using a decibel meter, hold 
the device in the listening position, facing speaker #1. Standard calibration levels for 
music are in the range of 78-93dB (Holman, 69). For the purposes of enjoying my 
project, however, you can simply adjust the speaker to any desired level and get a 
reading. Now solo channel 2, point the decibel meter at speaker #2, and adjust the 
speaker level until the meter reads the same level as speaker #1. Chances are, you will 
only need to do this for the additional channels (speakers #3, 4, and/or 5), in order to 
match their level to the two stereo speakers you already have. If you do not have a 
decibel meter available, just do your best to get the pink noise sounding roughly the same 
volume from the listening position when played through each of the speakers you set up, 
and you are good to go. 
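In the sessions themselves, the pink noise comes from the Digirack plug-in, but the two underlying ideas (1/f-shaped noise, and a gain trim equal to the difference between two meter readings) are simple enough to sketch. The Python below is my own illustration, not part of the project; it uses NumPy, and the dB readings are made-up example values:

```python
import numpy as np

def pink_noise(n_samples, seed=0):
    """White noise shaped to a 1/f power spectrum (pink), normalized to +/-1."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    scale = np.zeros_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])  # power ~ 1/f => amplitude ~ 1/sqrt(f)
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    return pink / np.max(np.abs(pink))

# Matching levels: the trim for a surround speaker is just the difference
# between its meter reading and the front-speaker reference.
reference_db = 82.0   # hypothetical reading for speaker #1 at the listening position
surround_db = 78.5    # hypothetical reading for speaker #3 with the same noise
trim_db = reference_db - surround_db   # raise speaker #3 by this many dB
```

Raising the surround speaker's output by `trim_db` brings both meter readings into agreement, which is all the "in the ballpark" calibration above asks for.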

Limitations 

Perhaps the biggest limitation in using a discrete surround setup like the ones 
suggested for this project is the lack of a dedicated dynamic panning tool. Dynamic pans 
are certainly still possible (and used within my project), but they require fading the level 
down in one channel and up in another, rather than simply moving a joystick. For 
two-channel panning, this is not too much of a burden, but it does make certain effects, 
like circular pans across all the speakers, much more difficult to pull off. 
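As a concrete picture of what that fader work has to compute, here is a sketch (my own illustration, not a Pro Tools feature) of pairwise equal-power panning around the four-speaker square: the source angle is panned between the two adjacent speakers that bracket it, with cosine/sine gains so the summed power stays constant as the source sweeps around:

```python
import math

SPEAKER_ANGLES = [45.0, 135.0, 225.0, 315.0]  # quad layout, degrees clockwise from front

def circular_pan(source_deg):
    """Equal-power gains for the two speakers bracketing source_deg.

    cos/sin gains keep total power constant (g_a^2 + g_b^2 = 1), so the
    source does not get louder or quieter as it moves around the ring.
    """
    src = source_deg % 360.0
    gains = [0.0] * len(SPEAKER_ANGLES)
    for i, a in enumerate(SPEAKER_ANGLES):
        j = (i + 1) % len(SPEAKER_ANGLES)
        span = (SPEAKER_ANGLES[j] - a) % 360.0
        offset = (src - a) % 360.0
        if offset <= span:
            frac = offset / span               # 0 at speaker i, 1 at speaker j
            gains[i] = math.cos(frac * math.pi / 2)
            gains[j] = math.sin(frac * math.pi / 2)
            break
    return gains
```

Sweeping `source_deg` from 0 to 360 over a few seconds and writing the four gains as volume automation on duplicate tracks produces the circular pan; doing that by hand, fader by fader, is the burden described above.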

Another limitation in suggesting that users simply add whatever additional 
speakers they have to make a surround setup is that ideally, all speakers in a surround 



setup are supposed to be identical (Rumsey, 89). Once again though, the purpose of this 
project is to get people's feet in the door to surround sound, not to worry about ideals. 
Many surround speaker packages sold today are unmatched anyway. 

Although I initially considered presenting the project in multiple DAW formats, 
such as Logic or Ableton, I ultimately chose to present in Pro Tools alone. Pro Tools is, 
after all, the world's leading DAW (Collins, 1). I also do not own Logic or Ableton. 
Space was a consideration as well, considering that the project already contains 3 data 
DVDs worth of information as it stands. Presentation in other DAW formats would be 
worthwhile future work. 

Lastly, I chose not to employ the sixth "0.1" channel for Low Frequency Effects 
used in standard 5.1 setups. Admittedly, this was in part due to the fact that my 
subwoofer broke partway through this project. Additionally, though, I find that proper 
calibration of a subwoofer in a room is already the most difficult part of setting up a 
monitoring system, and then sending more bass to it in a separate channel complicates the 
issue further. Adding this sixth channel presents new routing challenges for the end user 
as well. I wanted there to be the fewest obstacles possible in order for new users to begin 
enjoying surround sound in their homes, so I capped the channel number at five. 



CHAPTER II 
PROJECT DOCUMENTATION 
Artist/Group Information 
Group: "We's Us" 

We's Us was formed from the melting pot that is the Denver music scene. Started 
by Michael "WeezE" Dawald, the members of We's Us have played together under many 
different names and decided to create something new. All of the members come from 
wide-ranging musical backgrounds and geographic regions. All eight songs on this 
album are originals written by Michael "WeezE" Dawald, with contributions from the 
artists listed below. The following is a list of all the artists that performed on this album. 

Performers: 

Michael "WeezE" Dawald - Guitar, Bass, Vocals 

Blake Manion - Drums 

Seth Marcus - Bass 

Michael Bayley - Keys 

Terone McDonald - Drums 

Lawrence Williams - Percussion 

Patrick Dawald - Vocals 

When I first met WeezE in the summer of 2011, he was performing with Blake 
and Seth as a three-piece group under the name "Eager Minds." I agreed to record the 
group for my thesis project. Before the sessions began that fall, the group disbanded over 



personal differences. However, they agreed to come together again to lay down the first 
four tracks appearing on this album. For the next four tracks, WeezE decided to fly in his 
long-time friend and professional drummer Terone. WeezE played the bass parts himself 
on all of the next four tracks, except "Metaphor," which features Seth on bass again. 
Lawrence then added hand percussion on "State of Mind," "Aaron," "West," and "In the 
Clouds." After all other instruments were tracked, I added various organ and synth parts 
to all of the tracks, except "State of Mind," which I kept as-is. 

Today, We's Us is playing at venues all across town, including Cervantes 
Masterpiece Ballroom, Larimer Lounge, Lion's Lair, Hi Dive, Herman's Hideaway, and 
more. The current line-up features WeezE on guitar, Blake back on drums, and a close 
friend of WeezE's on bass, by the name of Chris Crantz. I performed with the group on 
keys for the first few shows, but ultimately was unable to commit fully, due to my full- 
time job and obligation to complete this thesis project. I look forward to working with 
these artists again in the future, as well as the opportunity to pursue more production 
projects of my own. 

Producer/Engineer: Michael Bayley 

I tracked, edited, mixed, and mastered all eight tracks myself. Details on each 
part of this process are explained below. 



Tracking 
Instrumental Tracking (Tracks 1-4) 

Control Room: Studio H (Rupert Neve Portico) 
Tracking Room: Arts 295 
DAW: Pro Tools 9/10 
Interface: Digidesign 192 
Drum Tracking: 9/23/11, 1-10pm 




Figure 2.1: Kick Drum 



Figure 2.2: High/Rack Tom 




Figure 2.3: Low/Floor Tom 



Figure 2.4: Snare Drum 






Figure 2.5: Drums, Overheads 
Figure 2.6: Drums, Room Mikes 

Microphone List: 

(1) AKG D112 (Kick) 

(2) Shure SM81s (Toms) 

(1) Sennheiser 441 (Snare, top) 

(1) Beyer M500 (Snare, bottom) 

(2) AKG C414s (Overheads) 
(2) Neumann U87s (Rooms) 

I have been polishing my drum mic setup for some time now. For kick, I have 
tried everything from one microphone to three, and found that a single D112 gets enough 
attack and thump for my desired sound. As with all mikes, I have the drummer 
repeatedly hit the drum, while another person slowly moves the microphone around until 
I hear the sound I like in the studio. For the kick, this normally results in a placement 
partway into the hole on the drum, as shown above. 

For the toms, I have traditionally used Sennheiser 421s, but after having a 
problem with one of them in this session, I switched to Shure SM81s. I couldn't be 
happier; these small-diaphragm condensers provided a much more crisp-sounding attack 
on the toms than the 421s did, and although they picked up a little more noise from the 




rest of the kit, I believe this sharp transient response helped when gating the tom mikes 
during mixing. I plan to use these mikes for future drum tracking sessions, and would be 
interested to hear how they sound on other instruments. 

Snare is the drum I have had the hardest time getting to sound how I like. I have 
used the traditional Shure SM57 on top, but have always ended up cranking the highs in 
mixing, often resulting in unpleasant distortion. I found that the Sennheiser 441's 
brighter frequency response brought me closer to my desired sound from the get-go. For 
the bottom, the Beyer M500 ribbon microphone has won out for me over time. Although 
I do usually end up scooping out some low-mid "mud" in this microphone, the ribbons 
seem to deliver a smoother-sounding representation of high-end material like snares, 
which can otherwise sound brittle, especially when mixed with the other microphones. 

For overheads, I immediately fell in love with the AKG C414s. These are 
perhaps my favorite microphones that I have ever used. The "bump" in their high-end 
frequency response around 12.5kHz helps make cymbals shimmer, and when placed as a 
stereo split pair, the entire kit from kick to crash seems to come through crystal-clear. In 
his book Shaping Sound: in the Studio and Beyond, Gottlieb makes specific reference to 
the quality of the AKG 414 in this capacity, pointing out that these microphones are 
"excellent in most situations, particularly for instruments that need a strong edge at 
higher frequencies" (Gottlieb, 142). 






Guitar Tracking: 9/24/11, 1-9pm, 10/1/11, 2-10pm, & 10/15/11, 12-8pm 




Figure 2.7: Guitar 1, All Mikes 

Microphone List: 

(2) Beyer M500s (Close) 

(2) Neumann TLM193s (Mid, front) 

(2) Neumann U87s (Rooms, rear) 

This was the first time I tried recording guitar using two amps simultaneously. 
WeezE brought both amps in so we could decide which we liked better, but upon 
listening, I felt that the ideal sound would come from a combination of the two. The 
sound he liked to get from his Fender tube amp had a brighter high end, but could border 
on being harsh to the ear. The old Peavey amp sounded very warm, but lacked definition 
in the upper range. My decision to use both amps (and six microphones) was based 






partly on my knowledge that these tracks would ultimately be presented in surround 
sound. Note that the seventh microphone, seen at the bottom, was used only for talk-back. 

I was very happy with the sound we got. The guitar heard in the intro to "2012" 
is a good example of a single guitar take coming from both amps. "State of Mind" 
features a very full-sounding guitar double, where each side of the stereo field contains a 
different take from each amp. 

There is definitely still some amp noise that can be heard on the record, but I do 
not feel that it is enough to detract from the enjoyment of the sound. Tightening some 
screws on the Peavey amp certainly helped reduce the low-end rumble that it had when 
we first set it up, and turning down the lows on this amp's EQ reduced this noise further. 



Bass Tracking: 10/22/11, 12-5pm 




Figure 2.8: Bass, Close Mikes 



Figure 2.9: Bass, Room Mikes 






Microphone List: 
(1) AKG D112 (Close, top) 

(1) Sennheiser 421 (Close, bottom) 

(2) Neumann U87s (Rooms) 

The bass for these recordings was actually re-amped from DI sessions we did the 
previous week at my house using my Apogee Ensemble and Pro Tools 9. This is a 
strategy I am very likely to use in the future: record bass, guitar, keys, etc. DI at my 
house, then send those clean signals out through the amp and any outboard processing 
gear the artist has once we are in the studio. During the DI sessions, the signal is split, so 
that the interface sees only a clean DI signal from the instrument, while the artist hears a 
fully processed/amplified version of their sound in the room while they record. 

This technique has a number of benefits. For one, it cuts down greatly on the time 
needed in a professional studio, because once the takes are edited, it only takes the length 
of the song to record the final sound. Editing is also a little easier using only a DI signal, 
because one does not have to consider the tail ends of reverberation from the room in 
various mikes as the takes are spliced together. Finally, if the artist or producer is 
unhappy with the sound that was achieved in the studio, the performance is still intact, so 
that a new sound can later be achieved using the same performance. 

The downsides are subtle, but worth noting. There is a slight degradation in the 
precision of the audio with the additional analog-to-digital and digital-to-analog 
conversions that take place in this process. Also, some artists may perform better in a 
professional studio setting, surrounded by high-end equipment, with the feeling that "this 
is the final sound." For me personally, I like the convenience of the other approach. 






Instrumental Tracking (Tracks 5-8) 

Control Room: Studio H (Rupert Neve Portico) 
Tracking Room: Arts 295 
DAW: Pro Tools 9/10 
Interface: Digidesign 192 

Drum Tracking: 11/5/11, 11am-7pm 




Figure 2.10: Drums, All Mikes 

Microphone List: 

(1) AKG D112 (Kick) 

(3) Shure SM81s (Toms) 

(1) Sennheiser 441 (Snare Top) 

(1) Beyer M500 (Snare Bottom) 

(2) AKG 451s (Overheads) 
(2) Neumann U87s (Rooms) 






For the drum sessions with Terone, I used a nearly identical mic setup. The only 
difference was that I had to use the AKG 451s as overheads instead of the C414s, due to a 
theft that had occurred the previous weekend in the studio. This change, although it was 
not by choice, did have its merits. The cymbals, particularly the hi-hat, sounded even 
more clear and crisp in tracking with the 451s. I also found that I did not need to scoop 
out as much of the low end in the 451s as I did with the 414s later in mixing. 

Both drummers I recorded were exceptional in different ways. Blake was very 
dynamic, and had some very beautiful transitions and build-ups planned throughout these 
tracks that he knew so well. Terone was exceptionally consistent, particularly with how 
he hit the snare. This dynamic consistency made processing of his drums much easier, as 
I found that I did not need as much dynamic compression. Both were able to supply 
some fantastic drum fills, and in Terone's case, a very wide variety of them. 

Guitar Tracking: 11/19/12, 2-10pm & 12/10/12, 2-6pm 




Figure 2.11: Guitar 2, All Mikes 






Microphone List: 

(2) Beyer M500s (Close) 

(2) Neumann TLM193s (Mid, front) 

(2) Neumann U87s (Rooms, rear) 

For the second set of four tracks, WeezE decided to borrow his friend's Mesa amp 
in hopes of achieving an even better guitar sound. Although there was only one amp 
being used this time (the Peavey amp to the right of the picture was not being used this 
session), I decided to still use the same six microphones to track the sound. 

I personally was not as happy with the sound we got with this amp. For one, 
although there was not as much amp noise when the guitarist was not playing, the left 
speaker on the amp had a low-end rumble during play that I could not tame to a 
satisfactory level. Secondly, I was unable to achieve the same fullness in the stereo 
image during mixing that I had been able to with the two-amp setup. Lastly, I found that 
more equalization was needed to create a tone that I found pleasing, whereas with the two 
amps, the combination of their two tones blended into one that was much closer to my 
desired sound from the start. In future projects with WeezE, I will ask that we use his 
amps. 






Bass Tracking: 12/10/12, 6- 10pm 




Figure 2.12: Bass, Live Tracking 
Microphone List: 
(1) AKG D112 (Close, top) 

(1) Sennheiser 421 (Close, bottom) 

(2) Neumann U87s (Rooms) 

This was a very efficient day in the studio. For the first half, we finished off the 
guitar for the second set of four tracks. For the second half, I got the identical amp and 
microphone setup ready from the previous bass-tracking session, ran three of the four 
tracks worth of bass through (from DI recordings WeezE had done at my house during 
the previous weeks), and still had time for Seth to perform bass live in the studio for the 
final track. 

"Metaphor" was the only track where bass was performed live in the studio, rather 
than being re-amped through the process described above. I could not say whether I like 
the sound any better. Comparison was difficult, for a number of reasons. For one, Seth 






played in a somewhat different style for this track than he did for some of the others, 
including a slap-bass section, which required an adjustment of the pre-amp level to 
prevent clipping. Also, between the drummer change, guitar amp change, and having 
WeezE play bass on three of these four tracks, we decided early on that this would be a 
two-sided EP rather than a cohesive set of eight tracks. Thus, I used the opportunity to 
try some different processing for each set of tracks, including for the bass. 

What I do know for sure was that Seth seemed to be feeling much more pressure 
recording in the studio, with limited time, than he did at my house. He expressed a fair 
amount of frustration with himself during this session. Also, as stated above, editing was 
indeed easier for me with the DI takes. For these reasons, I prefer the re-amp approach. 

Percussion Tracking: 11/19/12, 6-8pm 

I unfortunately did not obtain a photo of the percussionist and his setup in the 
short time we had him in the studio. He appeared for a two-hour window during our 
November 19th guitar session, recorded four tracks worth of percussion, and then had to 
leave promptly for a show. I pulled from the same set of microphones that I had used for 
the drums and the guitar to mic his percussion set. 
Microphone List: 
(1) AKG D112 (Low conga, bottom) 

(1) AKG C414 (High conga, top) 

(2) Beyer M500s (Bongos, top) 

(2) Neumann TLM193s (Close, narrow split pair) 
(2) AKG 451s (Overheads, wide split pair) 
(2) Neumann U87s (Rooms) 






This was my first time miking a full-fledged percussion setup like this. 
Admittedly, ten microphones was probably overkill, but I was able to pick up every part 
of his kit as hoped, including the wind chimes and shakers. I also found that the wide 
variety of frequency responses obtained from the large mic collection resulted in a very 
balanced response that ultimately did not require any equalization in mixing. The overlap 
in mic selection between this and previous sessions helped the percussion blend in easily 
with the rest of the instruments. 

Vocal Tracking (Tracks 1-8) 

Control Room: Studio H (Rupert Neve Portico) 

Tracking Room: Arts 295 

DAW: Pro Tools 9/10 

Interface: Digidesign 192 

Vocal Tracking: 3/9/12, 2-10pm & 3/10/12, 2-10pm 




Figure 2.13: Vocals, Close Mike 
Figure 2.14: Vocals, Room Mikes 






Microphone List: 

(1) ADK Area 51 Tube Mic (Close) 

(2) Neumann U87 Ais (Rooms) 

The vocals for all eight tracks were recorded in a period of two days. We flew in 
WeezE's cousin, Patrick Dawald, to sing lead on the majority of the tracks. WeezE sang 
the lead on "Fall Haze," and "Aaron," and "Drama." 

We did a microphone shoot-out to decide which mic sounded best with Patrick's 
voice. We compared the ADK Area 51 and Rode K2 (both tube mikes), as well as the 
Neumann U87 Ai, Audio Technica AT4040, and Shure Beta 58. After deciding on the 
Area 51, we were able to commit the studio's two U87 Ais to the room. The Area 51 
was also well-suited for WeezE's voice, and as we were interested in keeping the sound 
consistent, we had him use the same setup. 

Patrick was able to lay down most of the tracks fairly quickly, usually in three to 
five takes. I could tell that his accuracy in pitch would require very little Auto-Tune, if 
any, as was the case later in mixing. For the tracks that Patrick was less confident in, I 
very much found myself in the role of producer: coaching him through lyrics, helping get 
his confidence up, and making some artistic decisions as to the delivery of the lyrics. 

Keyboard Tracking (Tracks 1-8) 

Control Room: Home 
Tracking Room: Home 
DAW: Pro Tools 9/10 
Interface: Apogee Ensemble 
Numerous sessions, April-June 2012 






Once all other instruments were tracked, I began adding some keyboard parts of 
my own to help fill out the tracks. I recorded these DI into my Apogee Ensemble at 
home. Had I finished the other parts sooner, I might have liked to re-amp these parts 
through my Roland amp and mic it up in the studio, much like we did with the bass. 
Things being as they were, the studios were closed for the summer, and DI had to suffice. 

Each time I came up with a part, I had WeezE listen to confirm or deny its 
addition. I ended up adding keys to every track except "State of Mind." The keyboard 
parts I had for that song went through numerous revisions, until I finally gave up and 
decided to release the track without keys. Thanks to my additions on the other tracks 
though, I was invited to play a number of shows with the band before the deadline 
approached to complete this thesis documentation. 

Other Tracking Notes 

There was a fair amount of planning and foresight that went into my tracking 
technique, and a few aspects of this are worth discussing in more detail. In the following 
paragraphs I will address my overall strategy for placement of instruments, mikes, and 
baffles in the room, as well as my decision not to use a metronome (or "click") for 
tracking these songs. 

Before I even began placing the instruments on microphones in the room, I took 
time to explore the acoustic space using clapping and my voice. I knew that I wanted a 
big room sound on the record, so that very little reverb would need to be added in mixing. 
Although I have used some of the best reverb plug-ins available, historically, I have 
found that this is the type of plug-in I am least satisfied with. Understandably, reverb is 
very difficult to emulate, considering that the sound consists of "thousands or even 






millions of individual reflections that can each sound slightly different and arrive at 
different times" (Shepherd, 182). So, when a large, treated room like Arts 295 is 
available, I prefer to have the majority of the reverberation occur naturally in the room 
itself. Thus, I did not place any baffles around the instruments, as you can see in the 
photos. Instead, I used the baffles to address some of the odd echoes and rattling I could 
hear in the corners. In one corner, I found that a particular arrangement of three baffles 
created a pleasing "slap-back" echo that sounded especially good on the snare drum. I 
chose to have all the instruments in the center of the room, facing the corner that had the 
echo. 

As for microphones, I generally used the rule of thumb for surround sound that 
there should be at least as many microphones as there are channels in the final 
presentation (Robjohns, 4). This definitely meant a greater number of microphones on 
each instrument than I was used to using, but I found that this approach afforded me 
some distinct advantages. As I mentioned with the percussion, I found that having a wide 
variety of frequency responses from the different microphones helped to create a 
balanced presentation of each instrument, rather than having the sound heavily "colored" 
by the frequency response of just one microphone. This worked well for my desired 
sound, which I hope comes across as more natural and live than artificial or processed. I 
also liked the ability to exclusively hard pan the different mikes, reducing the potential 
for phase distortion, and to create a perceived location of an instrument in the stereo field 
by adjusting the relative levels of the mikes rather than moving the pan position. 

My microphone placement was based very much on prior experience, as well as 
experimentation during each tracking session. Dynamic microphones were consistently 






placed off-axis, and condenser microphones were consistently used as split pairs. While I 
have experimented with many other techniques in previous projects, I have been happiest 
with the ones I chose here. The ultimate rule with microphone placement was, of course, 
to use the placement that sounded the best. For the drums, I used a technique similar to 
what some would call "Recorderman Technique" (Des, 1). I adjusted the position of the 
overheads until I was able to measure that they were both equidistant from the kick and 
the snare. This helped ensure that the close mikes were in phase with the overheads. 
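The equidistance check described above can be made concrete with a little arithmetic: any mismatch in the two overheads' distances from the snare becomes an arrival-time offset measured in samples. The following is a minimal sketch in Python; the distances, sample rate, and function name are illustrative, not values measured in these sessions.

```python
# Illustrative sketch: converting a path-length mismatch between two
# microphones into a timing offset in samples. Values are hypothetical.

SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in air at ~20 C
SAMPLE_RATE_HZ = 44_100      # a common tracking sample rate

def path_mismatch_to_samples(distance_a_m: float, distance_b_m: float,
                             sample_rate_hz: int = SAMPLE_RATE_HZ) -> float:
    """Return the arrival-time offset, in samples, caused by two mics
    sitting at different distances from the same source."""
    delta_m = abs(distance_a_m - distance_b_m)
    delta_s = delta_m / SPEED_OF_SOUND_M_S
    return delta_s * sample_rate_hz

# A 5 cm mismatch already amounts to several samples of offset:
offset = path_mismatch_to_samples(1.00, 1.05)
```

This is why even a small placement error between the overheads shows up as an audible phase discrepancy against the close mikes.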

My decision not to use a metronome turned out to be very much a double-edged 
sword. We began experimenting with and without a click during our early drum-tracking 
sessions, and for Blake, his performances were consistently better without one. Purely 
looking at the performances we came away with, I cannot complain too much about the 
rhythmic consistency. I think any deviation in tempo in these tracks is often offset by the 
expressiveness of the performances. "Drama" is a particularly good example of a 
dynamic track that benefits from the freedom to deviate slightly above and below the 
standard tempo of the song, helping to create moments of calm and excitement. If you 
listen closely, the tempo of "In the Clouds" ends up significantly faster than it begins. 
While this was a concern early on, I feel that with all the other instruments in place, the 
listener is brought along smoothly for the ride from dreamy-slow to rocking-fast. "West" 
has some noticeable "hiccups" in tempo that would have been far easier to edit with a 
click, but in the grand scheme of the listening experience, I do not consider them too 
major. In this day and age, I find it refreshing when a record shows any sign that a song 
was actually performed "live" rather than in tempo-corrected pieces, something that has 
become increasingly rare in the past few decades. More than half of these songs consist 






of one complete drum performance, and the greatest number of edits made to any drum 
performance was four. 

The biggest downside in not using a click was in the tracking and editing process 
for the other instruments. There were numerous segments that could have easily been 
copied and pasted had we recorded to a grid, but instead I had to come up with a unique 
and tempo-accurate take for every single section of every song. This time-consuming 
burden alone may steer me back toward using a click for tracking future projects. When 
all is said and done though, it is nice to be able to say that at every moment of every song, 
the listener is hearing a unique performance from each instrument. 

Editing 

Editing is the part of the process that feels the most like work to me. It does 
indeed involve artistic choices, and sometimes choosing between a couple guitar solos or 
vocal takes can be fun, but most of the time I am looking to finish as quickly as possible 
so that I can begin mixing. I did become much more efficient at editing during this 
project. This was the first time I began using the "playlist view" in Pro Tools, where 
every take on a given track is displayed at once, in different colors. This made the 
"comping" process much easier. Although the term "comping" has other meanings in 
music, in audio production, a "comp track" is "an audio track composed of segments 
copied from other tracks, for instance to combine the best portions of various recorded 
takes of the same performance" (Sams, 40). In some instances, such as with the drums, 
and the lead guitar on "Drama," I was able to use whole takes. On the opposite end of 
the spectrum, however, I found myself needing to splice together more than 40 different 
pieces of percussion in order to have the comp track fit tightly with the drums. 






As I mentioned above, the fact that I did not use a click track during tracking 
came back to haunt me in the editing process. I estimate that editing would have taken at 
most half the time that it did had I been able to copy and paste segments across 
the tracks. I do not necessarily regret not using a click track, for the reasons described 
above, but for efficiency's sake in future projects, I am likely to recommend that we do. 

There were some pleasant surprises in editing, like the fact that I was often able to 
get complete guitar doubles from only three takes of rhythm guitar. Other times, I really 
struggled to fill in the gaps. Another repercussion of not recording to a click was that it 
was often difficult to begin playing the intros on time in overdubbing. In some cases I 
was able to get Blake to do a count-off with the drums, and in others I was able to edit a 
form of count-off onto the front end of the track, but there were still places where for 
some reason I never came away with a solid intro other than the scratch guitar. This is 
why both "Aaron" and "In the Clouds" begin with a different, thinner-sounding guitar 
than what is heard in the rest of the song. I did my best to make this sound like an 
intentional effect, but whether or not I succeeded is for the listener to decide. 

Probably the most challenging of all editing tasks was putting together the vocals 
on "Drama." As you can see in the session, the vocals are on quite a few different tracks. 
In some cases, this is because multiple layers of vocals are playing back simultaneously, 
but for the most part, this was necessary because the final performance came from three 
different sessions (including one scratch vocal session that I did not refer to in the 
tracking section), requiring separate processing in order to make the takes sound 
cohesive. Sometimes WeezE's best take for the first part of a line would come from one 
vocal tracking session, and then the only good take of the rest of the line would come from 






another. Blending these was challenging. Sometimes I was able to get a good vocal 
double for a line from WeezE, and other times I needed to use one from Patrick. In the 
end, though, I am happy with the final performance as it appears on the record, and I 
think that track overall is perhaps the strongest on the album. 

Stereo Mixing 

To me, mixing is the fun part. Mixing is where the engineer uses a wide array of 
tools to shape the sound of the audio, adjust levels, affect dynamics, and add effects like 
delays, reverb, and distortion. Although my final product is presented in surround sound, 
the vast majority of the time and effort spent on this project went into first crafting stereo 
mixes that I felt were my best work yet. From there, I was comfortable expanding these 
stereo tracks into a surround environment, a process described in the next section. For 
this section, I will break down the mixing process into six main steps, which I more or 
less performed in order: phasing, panning, equalization, compression, delays and reverb, 
and levels. 
Phasing 

I was very meticulous about adjusting phase relations of microphones for this 
project. I ended up going over my work multiple times before I was happy with the 
sound. The "phasing" I am referring to in this discussion is the timing difference as the 
same sound arrives at different microphones, rather than the intentional offsetting of 
timing one sometimes uses as an added effect (Sams, 146; Borwick, 332). If the timing 
of the sound from different microphones is not aligned correctly, unpleasant distortion 
occurs. 






Despite my best efforts to carefully place the overheads equidistant from the kick 
and snare in tracking, I found that small adjustments were still needed in order to get the 
entire kit sounding crisp and clean. I definitely spent the most time on the drums. To 
make phasing decisions, I sometimes measured distances in samples between points 
where a particular transient of a drum hit crossed zero on two different microphones, and 
sometimes just used my ears. I had not made any effort to get the kick or snare centered 
in the room mikes, so this took adjusting. I knew I wanted the reverberant room sound in 
the room mikes to arrive a little later than the more direct sound in the overheads, so I did 
not try to adjust the room mikes directly into phase with the overheads, but I did use my 
ears to adjust their timing so that the cymbals rang out in pleasing way when both the 
overheads and room mikes were playing. I did delay the timing of the close mikes on the 
kick, snare, and toms to get them into phase with the overheads. 
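Measuring the sample distance between two microphones, as described above, can also be automated: slide one signal against the other and keep the lag with the strongest correlation. The toy sketch below illustrates that idea in plain Python; the signals and the `best_lag` helper are hypothetical, not part of any tool used in this project.

```python
# Illustrative sketch: estimate the delay between a close mic and an
# overhead by testing every candidate lag and keeping the one with the
# highest correlation. The signals below are toy lists, not real audio.

def best_lag(reference, delayed, max_lag=64):
    """Return the lag (in samples) at which `delayed` best lines up
    with `reference`, searching lags 0..max_lag."""
    best, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(r * d for r, d in zip(reference, delayed[lag:]))
        if score > best_score:
            best, best_score = lag, score
    return best

# Toy example: the "close mic" copy arrives 3 samples late.
overhead = [0, 0, 1, 4, 2, -1, 0, 0, 0, 0, 0, 0]
close = [0, 0, 0, 0, 0, 1, 4, 2, -1, 0, 0, 0]
lag = best_lag(overhead, close)  # nudge the close mic earlier by `lag` samples
```

In practice the same alignment can be done by ear or by measuring zero crossings, as described above; this simply formalizes the measurement.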

The guitar did not take quite as much time to adjust as the drums. For the 
majority of the tracks, I did not like the sound introduced by the TLM193s I used as 
middle-distance microphones, so I dropped these and just used the M500s and room mikes. This 
made adjusting phase much easier. 

For the bass, I am still not certain whether I made the right decision in terms of 
adjusting phase on the room mikes. I was definitely able to get the DI track in time with 
the microphones (a delay of only four samples), but after much debate, I ultimately 
settled on a timing for the room mikes that involved one being moved up by a few 
thousand samples, and the other staying untouched. For whatever reason, this seemed to 
improve the clarity of the bass, which I suppose is all that matters. 






Panning 

I had some sense of where I planned to pan the different microphones while I was 
adjusting phase, so these two processes were not entirely separate. If the sound from two 
microphones was going to be hard-panned to opposite sides anyway, I did not make an 
effort to align their phase. This does mean my mixes may not be especially 
mono-compatible, but in this day and age, that is a risk I am willing to take. 

I suppose I did not do anything especially out-of-the-ordinary with my stereo 
panning artistically. For the drums, I hard-panned the overheads and room mikes to the 
left and right. The kick and snare mikes were panned dead center, and the close mikes on 
the toms were placed over their perceived image in the stereo field created by the 
overheads and rooms. I oriented the drums from the "drummer's perspective," as 
discussed in Lome Bregitzer's book Secrets of Recording (159). 

For the guitars, I hard-panned nearly every microphone, generally with the close 
mic on one side, and the rooms hard-panned left and right. For the tracks where I 
recorded guitar out of two amps, I usually panned the close mikes hard left and right, and 
either kept the instrument with a wide stereo sound if it was a lead part in the song, or 
turned up one mic relative to the other if I wanted it to sound like it was coming from the 
left or right. 

The bass was panned dead center, with the room mikes hard-panned left and right. 
Same went for the vocals. The keyboards were recorded in stereo, but often needed to be 
positioned to one side, so I simply hard-panned again and turned up the relative level of 
the side I wanted the keys to be on. For the percussion, I used a process similar to what I 






used with the drums, positioning close mikes where they were heard in the stereo field of 

the overheads and rooms. 
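Positioning an image by relative level, as described for the toms and keys, is essentially what a pan law does under the hood. The sketch below uses the common constant-power law; the function and values are illustrative, not the actual implementation of the Pro Tools panner.

```python
# Illustrative sketch of a constant-power pan law: as the image moves,
# the left and right gains trade off so that the summed power (and
# therefore the perceived loudness) stays constant.

import math

def constant_power_gains(position: float):
    """position in [-1.0 (hard left) .. +1.0 (hard right)].
    Returns (left_gain, right_gain) with left**2 + right**2 == 1."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0..pi/2
    return math.cos(angle), math.sin(angle)

left, right = constant_power_gains(0.0)   # centered image: equal gains
```

Adjusting the relative levels of two hard-panned mikes, as in the mixes above, moves the perceived image along this same curve without introducing the inter-channel crosstalk of a single panned source.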

Equalization 

I consider equalization the most interesting part of the mixing process. This is 
where I feel that the engineer has the most freedom to impact the overall sound of the 
recording. Equalization, or EQ, is defined as "a circuit or device for frequency-selective 
manipulation of an audio signal's gain" (Sams, 76). EQ is a big part of what makes the 
kick drum thump and the snare crack. It helps to make the different elements and layers 
in the mix sound distinct from one another, and along with level and panning, helps to 
draw attention to important elements, while leaving others in the background. 

As Bregitzer points out, "There is no right or wrong way to begin a mix; each 
engineer is different. The most common technique, however, is to begin with the drum 
tracks" (159). This is what I chose to do. Using the overheads as a reference point, I 
began making adjustments to the various close mikes. I gave the kick a big boost down 
around 50Hz to help the "thump" come through the mix. The snare top mike needed a 
scoop around 500Hz and a low shelf cut below 400Hz to tame the mids and remove some 
unneeded low end. For the snare bottom, I actually boosted the low-mids at 180Hz to 
fatten up the sound, and added a high shelf boost above 2.5kHz. The high tom had bell 
boosts at 700Hz and 300Hz; the low tom had bell boosts at 300Hz and 100Hz. Both had 
some high-end boosts as well. For the overheads, I ended up doing some rather unusual 
EQ that was quite different on the two sides. The ride was sounding too sharp on the 
right, so I cut 7kHz to tame it, but then I decided the whole stereo image sounded best 
when a reciprocal boost was added to the left side at the same frequency. Despite my 






best efforts during tracking to get both kick and snare centered in the overheads, I 
found that when the snare sounded centered, the kick still sounded heavier on the right. 
So, I cut 75Hz on the right and boosted it on the left. This solved my problem. For the 
room mikes, I simply used a low shelf cut below 400Hz. The U87s otherwise captured 
the room sound quite nicely. 

This was definitely the most elaborate, processing-intensive drum work I have 
done, but I think the final product sounds fairly clean and natural. In the end, I would 
often use the API 550A and 550B plug-ins for different EQ adjustments before 
compression, then do some very subtle adjustments with the PuigTec EQP1A, use 
another compressor, add tube saturation and plate reverb, and then remove some 
remaining mud with the Waves Q-Clone EQ after that. As explained in the dynamics 
section, I used parallel compression for most of the drums, so sometimes the compressed 
and uncompressed tracks had different EQs on them. Fine tuning the entire kit to my 
liking took an extremely long time. 

EQ'ing the bass was certainly easier than the drums, but still took a few revisions. 
My final decision involved very little EQ on either the DI signal or the miked signals. I 
added the 550B to each track for a little color, but the only actual adjustment I made was 
to cut below 50Hz on the DI signal to make room for the kick. Thanks to the re-amp 
process through which the bass was recorded, I found that I was able to focus all my 
efforts on obtaining a good sound during tracking, leaving little to be done afterward. 

In terms of the guitars, I generally found that any scratch guitar tracks that I 
decided to use took the most EQ work. I tried to use the same microphones as I did for 
the full guitar tracking sessions, but we only used one amp in the isolation booth (during 






drum tracking), and I also think the lack of room sound is part of what necessitated heavy 
processing in order to make these scratch tracks fit with the rest. The lead guitar in 
"Drama," for example, was all one scratch take recorded during drum tracking. The 
drums were one solid take as well. I made WeezE try to replicate his performance later 
with the full two-amp setup in the room, but nothing came close. That performance had 
brought tears to his eyes when he finished playing, and had to be kept. To make it sound 
fuller and warmer, I ended up using the Waves Q-Clone EQ plug-in multiple times, 
essentially alternating a wide cut and a wide boost in the same frequency range for a net 
result that mainly just sounded more processed and colored by harmonics. Without the 
warmth of the room, the scratch guitar also needed a high shelf cut above 12.5kHz to 
avoid sounding harsh. 

Most of the other guitars were pretty easy to manage, and did not need a whole lot 
of EQ. I did, however, use quite a few low shelf and bell cuts around 300-400Hz to 
remove the ubiquitous low-mid mud. Other than that, a few high boosts around 5kHz left 
the guitars sounding bright and edgy where needed. 

For keys, I used little to no EQ across the board. The way I see it, these samples 
are already EQ'd, and much more processing starts to sound strange. Occasionally I 
would reduce some low end in an organ or Rhodes to ensure that it did not compete with 
the bass. I did not use any EQ on the percussion either, as I felt that the wide variety of 
microphones used helped to accumulate a nice, balanced sample of the sounds. 

For the vocals, I really credit the microphone shoot-out we did and the quality of 
the ADK Area 51 microphone that we selected for the fact that next to no EQ was 
needed. I tried a number of different adjustments, but in the end, all that was added was a 






small 2.5kHz boost for the vocals on the first four tracks, and for the second set of four, I 
did no EQ whatsoever. The main reason I made an adjustment on the first four was that 
after comparing my mixes to some professional mixes that I admire using iZotope 
oZone's "matching" EQ tool, I found that the only real difference was that my mixes had 
a little less 2kHz. Even after adding that boost to the vocals in the 2kHz range, this was 
still true. The result of this overall EQ difference, to me, is that compared to some tracks, 
my mixes can be played very loudly without sounding harsh in the high mids. The 
expense is that at lower volumes, this 2kHz range is very effective at grabbing our ears' attention, 
due to the fact that speech carries a lot of important frequency content there, and my 
mixes have a little less total content in this range (Edis-Bates, 1). 

On the whole, I have learned that the particular EQ chosen can be about as 
important as the adjustments made. Many of the EQ plug-ins I have will distinctly color 
the sound immediately upon being added to the track, and the number of possible 
adjustments that can be made varies greatly between plug-ins. The most delicate 
adjustments seem to be high boosts, and for these I found that high-quality emulations of 
analog EQs provided the purest sound. 
Dynamics 

Dynamic processing includes use of gates, expanders, compressors, and limiters, 
as well as forms of tape or tube saturation. The need for this type of processing is 
perhaps less obvious to the average music listener, but many engineers would argue that 
it is the most important part of our work. Renowned mix engineer Andy Johns, for 
example, argues that "...compressors can modify the sound more than anything else" 






(Owsinski, 58). Generally speaking, dynamic processing affects the sound by altering 
how quickly the level of the sound increases or decreases. 

I used the expander in oZone to gate both the top and bottom snare drum, as well 
as the toms. I have always had issues with the hi-hat coming through too loudly in my 
mixes, but careful gating of the snare mikes does help with this issue greatly by removing 
the hi-hat sound from these microphones most of the time. My attack times for the 
expander were extremely short, so that the transients of the hits could come through. The 
releases were a little longer, and their exact length depended on the length that the 
particular drum made a sound (shorter for the snare, longer for the high tom, then even 
longer for the low tom). I have always found it bothersome when I can hear a gate or 
expander in a mix boosting the cymbals up each time it opens, so I went to great lengths 
to minimize this effect for these tracks. The only time I really notice that effect in these 
mixes is in the intro to "Drama," where the soft snare hits required a low threshold for the 
expander. I opted not to gate or expand any other tracks. 
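The fast-attack, drum-length-release behavior described above can be sketched in a few lines. The coefficients, threshold, and signal below are purely illustrative; this is not a model of the oZone expander's actual parameters.

```python
# Toy sketch of a gate: fast attack so the drum transient passes,
# slower release so the drum's tail survives while quiet bleed (like
# hi-hat in the snare mikes) is attenuated between hits.

def gate(signal, threshold=0.5, attack=0.9, release=0.1):
    """Return the gated signal. `attack` and `release` are per-sample
    smoothing amounts (0..1) for opening and closing the gain."""
    gain, out = 0.0, []
    for x in signal:
        target = 1.0 if abs(x) >= threshold else 0.0
        coeff = attack if target > gain else release
        gain += coeff * (target - gain)
        out.append(x * gain)
    return out

# Loud hit followed by quiet bleed: the hit passes, the bleed decays.
processed = gate([0.9, 0.8, 0.1, 0.1, 0.1])
```

The slow release is what keeps the gate from audibly "pumping" the cymbals each time it opens, the artifact described in the paragraph above.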

Compression, meanwhile, came mostly from the Empirical Labs EL-8 Distressor, 
a wonderful piece of outboard gear that I decided to purchase last year. Although I hear 
that a plug-in emulation exists, I have never come across anything that I like as much as 
the real thing in terms of compression. This device is highly versatile, and for drums in 
particular, has a way of getting big gain reductions while still keeping transients intact. I 
used the longest attack (11) and nearly the shortest release (0-0.1) pretty much every 
time, so that the early transients came through, and the sound was only compressed 
briefly before returning to its normal dynamic state. These kinds of compressor settings 
are known to help give "punch" to a sound (Shepherd, 137). Just about all the drums 






were processed with a 4:1 ratio, with the other settings in their "neutral" state. I found 
that limiting the number of circuits I ran the drums through on the Distressor (e.g. high- 
pass filters, distortion settings, and other optional circuits on the device) helped to keep 
the cymbals sounding clean and clear across the board. For the bass, I did decide to 
apply the "Distortion 3" setting, and in one case, the "Distortion 2" setting. For the 
drums, the reduction meter normally read between 1dB and 4dB. The bass reductions 
were in that range as well, or sometimes a little higher during louder sections. I opted not 
to use the Distressor on the other instruments, mainly because running each channel out 
through the device in real time became very time consuming. 

For the drums, I used a technique called "parallel compression," in which a heavily 
compressed copy of the signal is blended back in with the unprocessed original, adding 
density and sustain while preserving the transients. 
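The blend at the heart of parallel compression can be sketched in a few lines of Python. The hard-knee compressor and the threshold/ratio values here are illustrative stand-ins, not a model of the Distressor or any plug-in used in this project.

```python
# Illustrative sketch of parallel compression: mix a compressed copy
# back in with the dry signal. Loud peaks stay largely intact through
# the dry path while the compressed path lifts the quiet detail.

def compress(sample, threshold=0.3, ratio=4.0):
    """Static hard-knee compression of a single sample's magnitude."""
    mag = abs(sample)
    if mag <= threshold:
        return sample
    out_mag = threshold + (mag - threshold) / ratio
    return out_mag if sample >= 0 else -out_mag

def parallel_compress(signal, blend=0.5):
    """Mix dry and compressed copies; blend=0.0 is entirely dry."""
    return [(1 - blend) * x + blend * compress(x) for x in signal]

mixed = parallel_compress([0.1, 0.9, -0.6])
```

Because the two paths can be processed separately, the compressed and uncompressed tracks can carry different EQs, as noted in the equalization section above.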
My go-to plug-in compressor was the PuigChild 670, a Waves emulation of a 
Fairchild compressor. I found that this compressor often achieved my desired result with 
the default settings alone. Like the Distressor, this compressor introduces a noticeable 
harmonic coloration of the sound, beyond just the dynamic range reduction, that I found 
pleasing to the ears. I used the 670 on most instruments that didn't go through the 
Distressor, and for the bass, I actually used both the Distressor and the 670, in that order. 
The only other form of plug-in compressor I really ended up using was McDSP's Analog 
Channel on the kick and snare buses for a little extra analog-sounding "fatness." 
Delays and Reverb 

I really did not make very extensive use of reverb in these recordings, because as 
explained in the tracking section, I was able to capture much of the natural reverberation 
in the tracking room, Arts 295. The vast majority of the audible reverb in these 
recordings (like on the drums, for example) is from the room itself. A very small amount 






of plate reverb was added to each instrument and vocal bus using the reverb portion of 
iZotope oZone, but the wet signal was only set at about 5-12% relative to the dry signal, 
depending on the instrument. 

Delays were used, but normally just on the vocals. I used sends from the vocal 
buses to the Massey TD5 delay plug-in. I used the "tap tempo" feature to get the delay in 
time with the track, normally with a simple quarter-note division. Guitar was the only 
other instrument that had delay on it, and in many cases, I was simply accenting delay 
that WeezE already had from his pedals. Two examples of very audible delay are the 
lead guitar in "Drama," with a fairly loud delay sent to the left channel to help balance 
the track, and the vocals in "In the Clouds," where I occasionally automated the delay 
level up quite hot as an effect. 
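The quarter-note delay times that the tap-tempo feature produces follow directly from the song's tempo: one beat lasts 60/BPM seconds. A quick sketch of the arithmetic; the tempos and helper name are illustrative.

```python
# Tempo-to-delay-time conversion: a quarter note at a given BPM lasts
# 60,000 / BPM milliseconds; other divisions scale from there.

def delay_ms(bpm: float, note_division: float = 1.0) -> float:
    """Delay time in milliseconds for a given tempo and division
    (1.0 = quarter note, 0.5 = eighth note, and so on)."""
    return 60_000.0 / bpm * note_division

quarter = delay_ms(120)        # 500 ms at 120 BPM
eighth = delay_ms(120, 0.5)    # 250 ms
```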
Levels 

Levels were obviously being adjusted throughout the recording process, but for 
my final levels in the track, my process tended to be as follows. I first made certain I was 
happy with the drum tracks. The challenge was often getting the kick and snare loud 
enough without sounding too dry, since I was primarily using the overhead and room 
mikes for reverb. Accomplishing this was the result of adjustments made in all six of the 
categories of mixing presented in this section. Once I was content with my drum levels, I 
added in bass, using the comparative level with the kick as my reference point. Then, I 
added all the guitars, and adjusted their levels to fit with the drums and bass. I made sure 
that rhythm guitars were softer than lead guitars, and sometimes waited to add the guitar 
solos until after I added the vocals. I then added keyboards, making these pieces fit 
without standing out too much, with a few exceptions where I felt the keys should draw 






attention to themselves. Lastly, I added the vocals, adjusting their level to a range where 
I thought every lyric could be deciphered, but no louder. 

I did do a fair amount of automation, but more so on the first four tracks than the 
second set of four. This had to do mainly with Blake's drumming style, but also with the 
nature of the tracks. "Drama" required a fair amount of automation with kick and snare in 
order to keep these pieces consistent and sitting well in the mix with overhead and room 
mikes. Terone was so consistent that his drums already sounded compressed to me, and 
with a little "kiss" of added compression from the Distressor, I did not have to automate 
his drums at all. Vocals were automated on every track, sometimes to extremely 
thorough levels. One particular example that took quite a bit of time was in "2012," near 
the end, when he says "In the shining of the full moon/ 1 am always looming in the 
room." The words "full," "looming," and "room" were far louder than the rest of the 
vocals, causing some terrible distortion in the microphone. Compression only made the 
distortion worse. Instead, I carefully painted volume curves to offset these level changes 
as they took place, and simultaneously automated the reverb/delay sends to compensate 
in a way that created a cool effect. In the end, it might be the coolest-sounding part of the 
vocals on that track. 

Mastering 

"Any mastering engineer will tell you that you should not master a recording 
yourself (Bregitzer, 184). Going against this recommendation, I chose to master my 
own project, with the goal in mind of an entirely self-produced thesis project. I took a 
topics course on mastering during my studies at University of Colorado Denver, and this 
gave me some perspective on the mastering process. I arrived at a number of different 






conclusions after taking the course. One of these is that high-quality, highly expensive 
equipment in an acoustically optimized space is obviously going to be better for making 
judgments about the sonic qualities of a mix. Indeed, professional mastering houses tend 
to be better equipped in this way (Owsinski, 86). 

The other conclusion I arrived at, however, is that whenever possible, it's 
generally better to address issues of balancing frequency content (i.e. equalization) at the 
mixing stage. This is for two reasons. For one, when a mix is sounding imbalanced 
(heavy or weak) in a certain frequency range, and an overall EQ adjustment made in 
mastering improves the mix on the whole, it's rare that every component in the mix will 
actually sound better after this adjustment. For example, I gave some of my mixes to a 
former classmate now working for Sony in a studio in San Diego, and he was able to 
identify a build-up around 100Hz in my mixes. I tried cutting some of the 100Hz content 
with an EQ on the master bus, and while this did resolve the issue, I thought some of the 
guitars now sounded thin. So instead, I used the same EQ adjustment (a bell cut with Q 
Clone) on the drum, bass, and some guitar buses, and left the rest untouched. This 
sounded better. The second reason I feel that EQ should be handled in the mixing stage 
is that the vast majority of EQs— even those that are considered "mastering-grade" — 
seem to introduce unpleasant, audible distortion when placed across the master bus. 
Perhaps this is not true for some of the best mastering-grade hardware EQs, but it is for 
all of the ones I own. Mastering EQ seems to improve certain elements of the mix while 
hurting others and degrading clarity, and so I prefer to make all EQ adjustments in the 
mixing phase. 






Thus, "mastering" for me meant purely limiting, using the Waves L3 
Ultramaximizer, matching levels, and timing the beginnings and endings of the tracks. I 
placed the limiter directly across the master bus, rather than bouncing to a new session, as 
I found that the clarity of the audio was improved by avoiding multiple bounces. For this 
project, I used the plug-in's "basic profile," with nearly the shortest release time, for 
reasons described in the dynamics section above. Reduction was generally around 1- 
3dB, and I simply used my ears over the course of multiple listening sessions to get the 
track levels in comparable ranges. 
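For context on the 1-3dB reduction figures above, a decibel value maps to a linear gain factor through the standard amplitude formula 10^(dB/20); the helper name below is just for illustration.

```python
# dB-to-linear-gain conversion: gain = 10 ** (dB / 20). Three decibels
# of limiting therefore scales peaks to roughly 71% of their level.

def db_to_gain(db: float) -> float:
    """Convert a decibel amplitude change to a linear gain factor."""
    return 10.0 ** (db / 20.0)

gain_3db = db_to_gain(-3.0)   # ~0.708
gain_1db = db_to_gain(-1.0)   # ~0.891
```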

Surround Mixing 

Once the stereo mixes were done, the surround mixes came along quite quickly. 
Having already achieved the sounds I wanted for each instrument, my challenge 
consisted entirely of placing these different elements and their components across a larger 
sonic space. I began bouncing stems of each instrument and vocal element, creating first 
a two-channel stereo stem, and then three-channel stems with separate left, right, and 
center information. For the three-channel stems, the left and right channels usually 
represented the room mikes, with the center channel representing close mikes. 
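
In code terms, the three-channel stem bounce amounts to little more than the following NumPy sketch. The actual bouncing was done by routing buses in Pro Tools; the function, names, and channel order here are illustrative only:

```python
import numpy as np

def make_lcr_stem(room_l, room_r, close_tracks):
    """Assemble a three-channel stem: room mikes stay in the left and
    right channels, and the close mikes are summed into the centre."""
    center = np.sum(close_tracks, axis=0)
    return np.stack([room_l, center, room_r])   # channel order: L, C, R
```

Keeping the close-miked material isolated in the centre channel is what later allows the "close up" and "far away" components to be repositioned independently in the surround field.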

This approach afforded me a number of benefits. For one, it kept intact the tonal
qualities of each instrument I had worked so hard to achieve during stereo mixing.
Having already spent so much time on the stereo tracks, it also spared me the
temptation to continue tweaking EQ and compressor settings endlessly. Next, the
left-right-center material from the three-channel stems gave me plenty of flexibility in terms
of creating perspective in surround sound, because I had separate components of each 
element that sounded close up or far away. I also had the stereo stems with both close 
and room sounds together for when I wanted to keep that combination together the way it 
was in the stereo mix. Lastly, I knew that I would want to check my mixes on the 
school's better-optimized surround systems, and the school does not yet have many of the 
plug-ins that I used. Bouncing stems printed these plug-ins onto the audio, solving this 
problem. 

One could argue that a surround sound mix created from stems, even three- 
channel ones, is not technically a full surround sound mix. I would argue that, in addition 
to providing the benefits described above, this approach was practically a necessity
given my system's limitations. By the end, my stereo mixing sessions were already using just
about all the processing power my computer had. Even when I was mixing on the 
school's computers during my Surround Sound course, sessions less complex than these 
were somewhat of a struggle for the computer to process when converted to surround 
sessions. By bouncing stems, I was able to get a fresh start, and focus primarily on 
panning, levels, and perspective. The way I see it, I was playing with Legos during 
stereo mixing, and Duplos during surround mixing. 

For these surround mixes, I went for a "middle of the band" perspective, as 
opposed to an "audience" perspective (Owsinski, 118; Holman, 7). True to our 
positioning on stage, the drummer is in back, the bassist is front left, vocalist is front and 
center, keys are usually back right, and I often tried to make the guitars sound like they 
are coming from everywhere. For my exact positioning and levels of the different 
channels, refer to the sessions themselves. 
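
For readers curious about what a panner does with such placements, here is one common constant-power pan law between two adjacent speakers, sketched in Python. This is a generic textbook law, not necessarily the one the Pro Tools surround panner applies:

```python
import numpy as np

def pan_pair(mono, position):
    """Constant-power pan between two adjacent speakers.
    position: 0.0 = entirely in speaker A, 1.0 = entirely in speaker B.
    Gains follow cos/sin so the summed power stays constant."""
    theta = position * np.pi / 2
    return np.cos(theta) * mono, np.sin(theta) * mono

# e.g. a source leaning front-left: mostly left feed, a little centre
# left_feed, centre_feed = pan_pair(bass, 0.2)
```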

CHAPTER III 
CONCLUSION 

I am certainly not the first to produce surround sound with discrete channels, or even to
suggest that equipment normally meant for stereo can be set up for surround. Many
authors have discussed these options (Sessions, 67; Holman, 112). However, using a
collection of Pro Tools sessions as the final presentation medium was not something I
came across in my research, and I believe this is the innovative piece of my project.
Chances are, this would not have been a worthwhile distribution method until very
recently, as the number of home studios has grown dramatically over the last five years
(Denova, 8).

Once a user has gotten in the door to surround listening and mixing, adjustments
to the setup can be made. We often fuss about ideal speaker setups, levels, and
placement, when the reality is that very few people have surround sound at all. What
matters is simply getting the additional speakers added and beginning to enjoy this
beautiful medium for music. After all, as Owsinski points out in his section on surround
sound, "Speaker placement is forgiving: Yes, there are standards for placement, but these
tend to be noncritical... In fact, stereo is far more critical in terms of placement than surround
sound is" (116). With that in mind, we have all seen stereo speaker setups that are far 
less than ideal, and music is still enjoyed on those. The same goes for surround. 

The next step for broadening the target audience with this surround presentation 
method is to make these mixes available for other DAWs, such as Logic or Ableton. This 
would inspire even more people to create their own surround setups. Also, I would be 
very excited to explore the possibility of obtaining stems of other surround mixes to use 
on my new setup. These would not need to be stems of the individual instruments, of
course, the way I presented my mixes, but simply the four or five separate channels of
audio used in the mix. If users like me could listen to Pink Floyd's Dark Side of
the Moon on our new setups, for instance, this would create an even greater appeal.

Even if only one new person builds a surround setup as a result of this
project, I would consider it a success. In the quest for a broader surround sound market,
we may have to inspire people one population at a time. Just as people were once
skeptical that stereo would ever take off, many feel that surround sound will never
enjoy broad popular appeal. I think this is equally short-sighted. Over time, surround
sound will continue to become more accessible, and there is no time like the present to 
begin enjoying the medium. 

BIBLIOGRAPHY 

"Audio Interfaces." Sweetwater.com. N.p., 2012. Web. 13 Oct. 2012.
<http://www.sweetwater.com/shop/computer-audio/audio_interfaces/>.

"Complete Production Toolkit." Avid. Avid Technology, Inc., 2012. Web. 13 Oct. 2012. 
<http://shop.avid.com/store/product.do;jsessionid=9A86EF438186A86153BB8F 
ECB378411E.ASTPESD2?product=307036370322096>. 

"Top Recording Arts Schools and Colleges in the U.S." Education-Portal.com. Education-
Portal.com, n.d. Web. 13 Oct. 2012. <http://education-
portal.com/recording_arts_schools.html>.

Borwick, John. Sound Recording Practice. Oxford: Oxford University, 1980. Print. 

Bregitzer, Lorne. Secrets of Recording: Professional Tips, Tools & Techniques.
Amsterdam: Focal/Elsevier, 2009. Print.

Collins, Mike. Pro Tools for Music Production: Recording, Editing and Mixing. Oxford: 
Focal, 2004. Print. 

Denova, Antonio. "Audio Production Studios." IBISWorld. N.p., Aug. 2012. Web. 10
Oct. 2012. <http://0-
clients1.ibisworld.com.skyline.ucdenver.edu/reports/us/industry/default.aspx?entid=1254>.

Des. "Recorderman Overhead Drum Mic Technique." Hometracked. Hometracked, 12
May 2007. Web. 13 Oct. 2012.
<http://www.hometracked.com/2007/05/12/recorderman-overhead-drum-mic-
technique/>.

Edis-Bates, David. "Speech Intelligibility in the Classroom." Edis Education. Edit Trading
(HK) Limited, 2010. Web. 17 Oct. 2012.
<http://www.ediseducation.com/component/content/article/36-tips/98-speech-
intelligibility-in-the-classroom.html>.

Glasgal, Ralph, and Keith Yates. Ambiophonics: Beyond Surround Sound to Virtual Sonic 
Reality. Northvale, NJ: Ambiophonics Institute, 1995. Print. 

Gottlieb, Gary. Shaping Sound in the Studio and Beyond: Audio Aesthetics and 
Technology. Boston: Thomson Course Technology, 2007. Print. 

Harris, Ben. Home Studio Setup: Everything You Need to Know from Equipment to 
Acoustics. Amsterdam: Focal/Elsevier, 2009. Print.

Holman, Tomlinson. Surround Sound: Up and Running. Amsterdam: Elsevier/Focal, 
2008. Print. 

Janus, Scott. Audio in the 21st Century. Hillsboro, OR: Intel, 2004. Print. 

Keene, Sherman. Practical Techniques for the Recording Engineer. Hollywood, CA: 
Sherman Keene Publications, 1981. Print. 

McCarthy, Bob. Sound Systems: Design and Optimization: Modern Techniques and
Tools for Sound System Design and Alignment. Amsterdam: Focal/Elsevier, 2010.
Print.

Morton, David. Sound Recording: The Life Story of a Technology. Westport, CT: 
Greenwood, 2004. Print. 

Moylan, William. The Art of Recording: Understanding and Crafting the Mix. Boston, 
MA: Focal, 2002. Print. 

Nisbett, Alec. The Sound Studio. Oxford: Focal, 1993. Print. 

Owsinski, Bobby. The Mixing Engineer's Handbook. Boston: Thomson Course 
Technology, 2006. Print. 

Owsinski, Bobby. The Recording Engineer's Handbook. Boston, MA: Artist Pro Pub., 
2005. Print. 

Pohlmann, Ken C. Principles of Digital Audio. 6th ed. New York: McGraw-Hill, 2011.
Print. 

Robjohns, Hugh. "You Are Surrounded." Sound on Sound. Sound on Sound, Nov. 2001. 
Web. 13 Oct. 2012. 
<http://www.soundonsound.com/sos/nov01/articles/surround4.asp?print=yes>.

Rumsey, Francis, and Tim McCormick. Sound and Recording: An Introduction. Oxford: 
Focal, 2006. Print. 

Rumsey, Francis. Spatial Audio. Oxford: Focal, 2001. Print. 

Sams, Howard W. Digital Audio Dictionary. Indianapolis, IN: Prompt Publications, 1999. 
Print. 

Sessions, Ken W. 4 Channel Stereo: From Source to Sound. Blue Ridge Summit, PA: G/L 
Tab, 1974. Print. 

Shepherd, Ashley. Plug-in Power!: The Comprehensive DSP Guide. Boston, MA:
Thomson Course Technology, 2006. Print.

Traylor, Joseph G. Physics of Stereo Quad Sound. Ames: Iowa State UP, 1977. Print.